Results 1 - 8 of 8
1.
Phys Med Biol ; 66(24), 2021 Dec 31.
Article in English | MEDLINE | ID: covidwho-2287037

ABSTRACT

Objective. Lesions of COVID-19 can be clearly visualized in chest CT images and hence provide valuable evidence for clinicians when making a diagnosis. However, due to the variety of COVID-19 lesions and the complexity of the manual delineation procedure, automatic analysis of lesions of unknown and diverse types from a CT image remains a challenging task. In this paper we propose a weakly-supervised framework for this task that requires only a series of normal and abnormal CT images, without annotations of the specific locations and types of lesions. Approach. A deep learning-based diagnosis branch is employed to classify the CT image, and a lesion identification branch is then leveraged to capture multiple types of lesions. Main Results. Our framework is verified on publicly available datasets and on CT data collected from 13 patients of the First Affiliated Hospital of Shantou University Medical College, China. The results show that the proposed framework achieves state-of-the-art diagnosis prediction, and that the extracted lesion features can distinguish between lesions showing ground-glass opacity and consolidation. Significance. The proposed approach integrates COVID-19 positive diagnosis and lesion analysis into a unified framework without extra pixel-wise supervision. Further exploration also demonstrates that this framework has the potential to discover lesion types that have not been reported and can potentially be generalized to lesion detection for other chest diseases.
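The weak-supervision idea described here — learning lesion locations from only image-level normal/abnormal labels — can be illustrated with a minimal sketch that is not the authors' actual network: per-pixel statistics are estimated from image-level-normal scans alone, and a test slice is scored by how far each pixel deviates from them. All shapes, intensities, and the z-score threshold below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for normal chest CT slices: background noise around 0.2.
normals = rng.normal(0.2, 0.02, size=(50, 32, 32))

# Per-pixel statistics learned from image-level-normal scans only.
mu = normals.mean(axis=0)
sigma = normals.std(axis=0) + 1e-6

# Abnormal test slice: same background plus a bright "lesion" patch.
test = rng.normal(0.2, 0.02, size=(32, 32))
test[10:16, 10:16] += 0.3

zmap = np.abs(test - mu) / sigma   # per-pixel abnormality score
lesion_mask = zmap > 6.0           # pixels far outside the normal range
```

The lesion patch lights up in `lesion_mask` even though no pixel-wise annotation was ever supplied, which is the essence of the weakly-supervised setting the paper addresses.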


Subject(s)
COVID-19 , Humans , Lung , SARS-CoV-2 , Thorax , Tomography, X-Ray Computed
2.
17th European Conference on Computer Vision, ECCV 2022 ; 13807 LNCS:537-551, 2023.
Article in English | Scopus | ID: covidwho-2263254

ABSTRACT

This paper presents our solution for the 2nd COVID-19 Severity Detection Competition. The task is to distinguish the Mild, Moderate, Severe, and Critical grades in COVID-19 chest CT images. In our approach, we devise a novel infection-aware 3D Contrastive Mixup Classification network for severity grading. Specifically, we train two segmentation networks to extract first the lung region and then the inner lesion region. The lesion segmentation mask serves as complementary information for the original CT slices. To relieve the issue of imbalanced data distribution, we further improve the advanced Contrastive Mixup Classification network with a weighted cross-entropy loss. On the COVID-19 severity detection leaderboard, our approach won first place with a Macro F1 Score of 51.76%, significantly outperforming the baseline method by over 11.46%. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
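The class-imbalance remedy mentioned above — a weighted cross-entropy loss — can be sketched as follows. The probabilities and class weights below are made-up illustrations, not values from the paper; the idea is simply that errors on rare severity grades contribute more to the loss.

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Class-weighted cross-entropy for imbalanced severity grades.

    probs: (N, C) predicted class probabilities.
    labels: (N,) integer class indices.
    class_weights: (C,) per-class weights (e.g. inverse class frequency).
    """
    eps = 1e-12
    picked = probs[np.arange(len(labels)), labels]   # prob of the true class
    weights = class_weights[labels]                  # weight per sample
    return float(np.mean(-weights * np.log(picked + eps)))

# Four severity grades: Mild, Moderate, Severe, Critical (class 3 assumed rare).
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.1, 0.6, 0.2, 0.1],
                  [0.2, 0.2, 0.3, 0.3]])
labels = np.array([0, 1, 3])
uniform = np.ones(4)
inv_freq = np.array([0.5, 0.5, 1.0, 2.0])   # assumed inverse-frequency weights

loss_u = weighted_cross_entropy(probs, labels, uniform)
loss_w = weighted_cross_entropy(probs, labels, inv_freq)
```

With the up-weighted rare class, the misclassified Critical sample dominates the loss, pushing the optimizer to pay attention to it.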

3.
IEEE Transactions on Emerging Topics in Computational Intelligence ; 2022.
Article in English | Web of Science | ID: covidwho-2192093

ABSTRACT

Recently, as nucleic acid testing for COVID-19 is reduced in large populations, computer-aided diagnosis with chest computed tomography (CT) images has become increasingly important in the differential diagnosis of community-acquired pneumonia (CAP) and COVID-19. In practice, there usually exists a mass of unlabeled CT images, especially in regions without adequate medical resources, and existing diagnosis methods cannot take advantage of the useful information among them. Therefore, there is a practical and urgent need to develop a computer-aided diagnosis model that can effectively exploit both labeled and unlabeled samples. To this end, in this paper we propose a semi-supervised multi-view fusion method for the diagnosis of COVID-19. It exploits both the discriminative features from labeled samples and the structure information from unlabeled samples, and fuses multi-view features extracted from CT images, including image features, statistical features, and lesion-specific features, to improve diagnostic performance. Specifically, the proposed model uses a semi-supervised learning technique with pairwise constraint regularization to learn from both labeled and unlabeled samples. Simultaneously, we employ a low-rank multi-view constraint to capture latent complementary information among the different features from CT images. Experimental results show that the proposed method outperforms state-of-the-art methods in the differential diagnosis of CAP vs. COVID-19.
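The pairwise constraint regularization described above can be sketched with a toy objective: a supervised loss on the labeled samples plus a penalty that pulls the predictions of constrained unlabeled pairs together. This is an illustrative simplification, not the paper's actual formulation; the regularization weight `lam` and all sample values are assumptions.

```python
import numpy as np

def semi_supervised_loss(pred, labels, labeled_idx, pairs, lam=0.5):
    """Toy semi-supervised objective with pairwise constraints.

    pred: (N,) predicted probability of COVID-19 for all samples.
    labels: (N,) 0/1 ground truth (consulted only at labeled_idx).
    labeled_idx: indices of the labeled samples.
    pairs: must-link index pairs (i, j) among unlabeled samples whose
           predictions should agree.
    """
    eps = 1e-12
    p = pred[labeled_idx]
    y = labels[labeled_idx]
    supervised = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    pairwise = np.mean([(pred[i] - pred[j]) ** 2 for i, j in pairs])
    return supervised + lam * pairwise

labels = np.array([1, 0, 0, 0])          # only the first two labels are used
labeled_idx = np.array([0, 1])
pairs = [(2, 3)]                         # unlabeled samples 2 and 3 must agree

pred_consistent = np.array([0.9, 0.1, 0.8, 0.8])
pred_inconsistent = np.array([0.9, 0.1, 0.9, 0.1])
loss_a = semi_supervised_loss(pred_consistent, labels, labeled_idx, pairs)
loss_b = semi_supervised_loss(pred_inconsistent, labels, labeled_idx, pairs)
```

Both prediction vectors fit the labeled samples equally well, but the one that violates the must-link constraint pays an extra penalty, which is how unlabeled structure information enters the objective.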

4.
Journal of Image and Graphics ; 27(3):774-783, 2022.
Article in Chinese | Scopus | ID: covidwho-1789676

ABSTRACT

Objective: Human chest computed tomography (CT) image analysis is a key measure for diagnosing human lung diseases. However, current scanned chest CT images might not meet the requirements for diagnosing lung diseases accurately. Medical image enhancement is an effective technique for improving image quality and has been used in many clinical applications, such as knee joint disease detection, breast lesion segmentation, and coronavirus disease 2019 (COVID-19) detection. Developing new enhancement algorithms is essential to improve the quality of chest CT images. A simple yet effective chest CT image enhancement algorithm is presented based on basic information preservation and detail enhancement. Method: A good chest CT image enhancement algorithm should improve the clarity of edges and speckles in the image while preserving much of the original structural information. Our human chest CT image enhancement algorithm is developed as follows. First, the algorithm exploits the guided filter to decompose the CT image into multiple layers, including a base layer and detail layers at multiple scales. Next, an entropy-based weighting strategy is adopted to fuse the detail layers, which strengthens the informative details and suppresses the texture-less layers. Afterwards, the fused detail layer is further strengthened by an enhancement coefficient. In the end, the enhanced detail layer and the original base layer are integrated to generate the target chest CT image. The proposed algorithm can enhance the details of the chest CT image while transferring much of the original basic structural information to the enhanced image. Moreover, with the help of our algorithm, surgeons can inspect clearer medical images without impacting their perception of the pathology information.
To verify the effectiveness of the proposed algorithm, we constructed a chest CT image dataset composed of 20 sets (3,209 chest CT images) and evaluated our algorithm and five state-of-the-art image enhancement algorithms on this large-scale dataset, with experiments performed both qualitatively and quantitatively. Result: Two qualitative comparison cases demonstrate that our algorithm mainly strengthens the useful details while effectively suppressing background information. As for the five comparison algorithms, histogram equalization (HE) and contrast-limited adaptive histogram equalization (CLAHE) usually change the whole-image intensities with large variation and degrade the image quality compared to the original image. The alternative toggle operator (AO) can enhance the chest CT image with much better visual quality than HE and CLAHE, but it excessively enhances both image details and background noise. Low-light image enhancement (LIME) and the robust retinex model (RRM) usually increase the intensities of the whole image and produce images of inappropriate contrast. The quantitative average standard deviation (STD), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR) values of our algorithm are significantly greater than those of the other five comparison algorithms (increased by 4.95, 0.16, and 4.47, respectively) on our constructed chest CT image dataset. Specifically, the greater average STD value indicates that our algorithm enhances images with clearer details than the five comparison algorithms, while the larger average SSIM and PSNR values validate that it preserves more basic structural information from the original image.
Meanwhile, the proposed algorithm costs only about 0.10 seconds to enhance a single CT image, indicating great potential for efficient application in real clinical scenarios. Overall, our algorithm achieves the best results among the six image enhancement algorithms in terms of both visual quality and quantitative metrics. Conclusion: In this study, we have developed a simple yet effective human chest CT image enhancement algorithm, which can effectively enhance the textural details of chest CT images while preserving a large amount of the original basic structural information. With the help of the enhanced human chest CT images, surgeons can diagnose lung diseases more accurately. Moreover, the proposed algorithm has good generalization ability and is capable of enhancing CT images scanned at other sites and even images of other modalities. © 2022, Editorial Office of Journal of Image and Graphics. All rights reserved.
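The base/detail decomposition at the heart of this pipeline can be sketched with a plain box filter standing in for the guided filter the paper uses, a single detail layer instead of multiple scales, and a fixed amplification factor `alpha` instead of the entropy-based weighting — all simplifying assumptions for illustration.

```python
import numpy as np

def box_blur(img, r=2):
    """Simple box filter (an illustrative stand-in for the guided filter)."""
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def enhance(img, alpha=1.5):
    """Base/detail decomposition with an amplified detail layer."""
    base = box_blur(img)           # low-frequency structural information
    detail = img - base            # high-frequency edges and speckles
    return base + alpha * detail   # recombine with boosted details

# Synthetic test image with a single vertical step edge.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
enhanced = enhance(img)
```

The base layer carries the original structural information through unchanged, while the detail layer — and hence the edges — is what gets amplified, matching the "basic information preservation plus detail enhancement" design.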

5.
Arab J Sci Eng ; : 1-12, 2022 Jan 30.
Article in English | MEDLINE | ID: covidwho-1661751

ABSTRACT

The COVID-19 outbreak requires urgent public health attention throughout the world due to its rapid human-to-human transmission. As per the guidelines of the World Health Organization, rapid testing, vaccination, and isolation are the only options to break the chain of COVID-19 infection. Lung computed tomography (CT) plays a prime role in the accurate detection of COVID-19. For the detection and pattern analysis of COVID-19, an improved Sobel quantum edge extraction with non-maximum suppression and adaptive threshold (ISQEENSAT) is employed here to extract clinical information from infected lungs while suppressing minor noise present in the chest. In comparison with conventional classical edge extraction operators, the proposed technique can detect sharper and more accurate clinical edges of the peripheral ground-glass opacity that appears in the initial stage of COVID-19. The edge extraction results support the detection and differentiation of COVID-19 infection. ISQEENSAT can be a useful tool for assisting COVID-19 analysis and can help physicians assess how much of a region is infected. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s13369-021-06511-9.
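The classical pipeline that ISQEENSAT improves upon — Sobel gradients, non-maximum suppression, and an adaptive threshold — can be sketched as follows. This is the conventional (non-quantum) baseline only, and the `mean + std` threshold rule is an assumption for illustration, not the paper's adaptive scheme.

```python
import numpy as np

def sobel_gradients(img):
    """Classical Sobel gradients via explicit 3x3 convolution."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx, gy = np.zeros((h, w)), np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            win = pad[dy:dy + h, dx:dx + w]
            gx += kx[dy, dx] * win
            gy += ky[dy, dx] * win
    return gx, gy

def edge_map(img):
    gx, gy = sobel_gradients(img)
    mag = np.hypot(gx, gy)
    # Crude non-maximum suppression: keep only local maxima of the gradient
    # magnitude along the dominant (horizontal or vertical) gradient axis.
    keep = np.zeros_like(mag, dtype=bool)
    m = np.pad(mag, 1)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            if abs(gx[y, x]) >= abs(gy[y, x]):   # mostly vertical edge
                keep[y, x] = mag[y, x] >= m[y + 1, x] and mag[y, x] >= m[y + 1, x + 2]
            else:                                # mostly horizontal edge
                keep[y, x] = mag[y, x] >= m[y, x + 1] and mag[y, x] >= m[y + 2, x + 1]
    # Adaptive threshold relative to the image's own gradient statistics
    # (an assumed rule, not the paper's).
    return keep & (mag > mag.mean() + mag.std())

img = np.zeros((16, 16))
img[:, 8:] = 1.0          # one vertical step edge
edges = edge_map(img)
```

On the synthetic step image, only the columns adjacent to the intensity jump survive suppression and thresholding, while flat lung-field-like regions are discarded.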

6.
BMC Res Notes ; 14(1): 178, 2021 May 12.
Article in English | MEDLINE | ID: covidwho-1225782

ABSTRACT

OBJECTIVES: The ongoing Coronavirus disease 2019 (COVID-19) pandemic has drastically impacted global health and the economy. Computed tomography (CT) is the prime imaging modality for diagnosing lung infections in COVID-19 patients. Data-driven and artificial intelligence (AI)-powered solutions for automatic processing of CT images predominantly rely on large-scale, heterogeneous datasets. Owing to privacy and data availability issues, open-access and publicly available COVID-19 CT datasets are difficult to obtain, thus limiting the development of AI-enabled automatic diagnostic solutions. To tackle this problem, large CT image datasets encompassing diverse patterns of lung infection are in high demand. DATA DESCRIPTION: In the present study, we provide an open-source repository containing 1000+ CT images of COVID-19 lung infections established by a team of board-certified radiologists. CT images were acquired from two main general university hospitals in Mashhad, Iran, from March 2020 until January 2021. COVID-19 infections were confirmed with matching tests, including reverse transcription polymerase chain reaction (RT-PCR), and accompanying clinical symptoms. All data are 16-bit grayscale images of 512 × 512 pixels stored in the DICOM standard. Patient privacy is preserved by removing all patient-specific information from image headers. Subsequently, all images corresponding to each patient are compressed and stored in RAR format.


Subject(s)
COVID-19 , Artificial Intelligence , COVID-19 Testing , Humans , Iran , Lung , SARS-CoV-2 , Tomography, X-Ray Computed
7.
Appl Intell (Dordr) ; 51(12): 8985-9000, 2021.
Article in English | MEDLINE | ID: covidwho-1198469

ABSTRACT

The rapid spread of coronavirus disease has become one of the worst disruptive disasters of the century around the globe. To fight against the spread of this virus, clinical image analysis of chest CT (computed tomography) images can play an important role in accurate diagnosis. In the present work, a bi-modular hybrid model is proposed to detect COVID-19 from chest CT images. In the first module, we use a Convolutional Neural Network (CNN) architecture to extract features from the chest CT images. In the second module, we use a bi-stage feature selection (FS) approach to find the features most relevant for predicting COVID and non-COVID cases from the chest CT images. At the first stage of FS, we apply a guided FS methodology employing two filter methods, Mutual Information (MI) and Relief-F, for initial screening of the features obtained from the CNN model. In the second stage, the Dragonfly Algorithm (DA) is used to further select the most relevant features. The final feature set is used to classify COVID-19 and non-COVID chest CT images with a Support Vector Machine (SVM) classifier. The proposed model has been tested on two open-access datasets, the SARS-CoV-2 CT images and COVID-CT datasets, and shows substantial prediction rates of 98.39% and 90.0% on these datasets, respectively. The proposed model has also been compared with past works on the prediction of COVID-19 cases. The supporting code is available at: https://github.com/Soumyajit-Saha/A-Bi-Stage-Feature-Selection-on-Covid-19-Dataset.
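The bi-stage filter-then-wrapper structure can be sketched on synthetic data. As labeled assumptions: absolute correlation with the label stands in for the MI/Relief-F filters, greedy forward selection with a nearest-centroid scorer stands in for the Dragonfly Algorithm and SVM, and all data is randomly generated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "CNN feature" matrix: 200 samples x 20 features, binary labels.
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 20))
X[:, 0] += 2.0 * y        # features 0 and 1 carry the class signal
X[:, 1] -= 1.5 * y

# Stage 1 (filter): score each feature by |correlation| with the label,
# a simple stand-in for the Mutual Information / Relief-F filters.
scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
screened = np.argsort(scores)[::-1][:5]       # keep the top-5 features

# Stage 2 (wrapper): greedy forward selection as a lightweight stand-in
# for the Dragonfly metaheuristic, scored by a nearest-centroid classifier.
def accuracy(feats):
    Z = X[:, feats]
    c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
    pred = np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)
    return float((pred == y).mean())

selected = []
for f in screened:
    if not selected or accuracy(selected + [f]) > accuracy(selected):
        selected.append(f)
```

The filter stage cheaply discards clearly irrelevant features, so the more expensive wrapper search only explores a small candidate pool — the efficiency argument behind the two-stage design.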

8.
Comput Struct Biotechnol J ; 19: 1391-1399, 2021.
Article in English | MEDLINE | ID: covidwho-1116510

ABSTRACT

As COVID-19 is a recent global health emergency, quick and reliable diagnosis is urgently needed, and many artificial intelligence (AI)-based methods have been proposed for COVID-19 chest CT (computed tomography) image analysis. However, very limited COVID-19 chest CT images are publicly available to evaluate those deep neural networks, while a huge number of CT images from lung cancer studies are publicly available. To build a reliable deep learning model trained and tested on a larger-scale dataset, this work builds a public COVID-19 CT dataset containing 1186 CT images synthesized from lung cancer CT images using CycleGAN. Additionally, various deep learning models are tested with synthesized or real chest CT images for COVID-19 and non-COVID-19 classification. All models achieve excellent accuracy, precision, recall, and F1 scores for both synthesized and real COVID-19 CT images, demonstrating the reliability of the synthesized dataset. The public dataset and deep learning models can facilitate the development of accurate and efficient diagnostic testing for COVID-19.
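The evaluation metrics used to compare the models on synthesized versus real images — precision, recall, and F1 — can be computed from counts of true/false positives and negatives; the label vectors below are made-up illustrations.

```python
def prf1(y_true, y_pred):
    """Precision, recall, and F1 for binary COVID vs non-COVID predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: 3 COVID and 3 non-COVID cases, one miss and one false alarm.
p, r, f1 = prf1([1, 1, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0])
```

Reporting all three alongside accuracy matters here because a synthesized test set could be imbalanced, and accuracy alone would then hide poor recall on the minority class.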
